Perceptions of perceptual symbols

Author

  • Lawrence W. Barsalou
Abstract

Various defenses of amodal symbol systems are addressed, including amodal symbols in sensory-motor areas, the causal theory of concepts, supramodal concepts, latent semantic analysis, and abstracted amodal symbols. Various aspects of perceptual symbol systems are clarified and developed, including perception, features, simulators, category structure, frames, analogy, introspection, situated action, and development. Particular attention is given to abstract concepts, language, and computational mechanisms.

I am grateful for the time and energy that the commentators put into reading and responding to the target article. They raised many significant issues and made many excellent suggestions. Even the strongest criticisms were useful in clarifying misunderstandings and in addressing matters of importance. I have organized the commentaries into two general groups: (1) those that defended amodal symbol systems and (2) those that addressed the properties of perceptual symbol systems. Because the second group was so large, I have divided it into four smaller sections. In the first, I address a wide variety of issues surrounding perceptual symbol systems that include perception, features, simulators, frames, introspection, and so forth. The final three sections address the topics raised most often: abstract concepts, language, and computational mechanisms.

R1. Defense of amodal symbol systems

No commentator responded systematically to the criticisms of amodal symbol systems listed in sections 1.2.2 and 1.2.3 of the target article: little direct evidence exists for amodal symbols; it is difficult to reconcile amodal symbols with evidence from neuroscience; the transduction and grounding of amodal symbols remain unspecified; representing space and time computationally with amodal systems has not been successful; and amodal symbol systems are neither parsimonious nor falsifiable.
Expecting a single commentator to address all of these concerns in the space provided would certainly be unfair. I was struck, however, by how little attempt overall there was to address them. Having once been deeply invested in amodal symbol systems myself (e.g., Barsalou 1992), I understand how deeply these convictions run. Nevertheless, defending amodal symbol systems against such criticisms seems increasingly important to maintaining their viability. A handful of commentators defended amodal systems in various other ways. Although most of these defenses were concrete and direct, others were vague and indirect. In addressing these defenses, I proceed from most to least concrete.

R1.1. Amodal symbols reside in sensory-motor areas of the brain.

The defense that struck me as most compelling is the suggestion that amodal symbols reside in sensory-motor regions of the brain (Aydede; Zwaan et al.). A related proposal is that perceptual representations are not constitutive of concepts but only become epiphenomenally active during the processing of amodal symbols (Adams & Campbell). In the target article, I suggested that neuroscience evidence implicates sensory-motor regions of the brain in knowledge representation (sects. 2.1, 2.2, and 2.3). When a lesion exists in a particular sensory-motor region, categories that depend on it for the perceptual processing of their exemplars can exhibit knowledge deficits. Because the perception of birds, for example, depends heavily on visual object processing, damage to the visual system can produce a deficit in category knowledge. Neuroimaging studies of humans with intact brains similarly show that sensory-motor areas become active during the processing of relevant categories. Thus, visual regions become active when accessing knowledge of birds.
Several commentators noted correctly that these findings are consistent with the assumptions that (a) amodal symbols represent concepts and that (b) these symbols reside in sensory-motor regions (Adams & Campbell; Aydede; Zwaan et al.). If these assumptions are correct, then damage to a sensory-motor region could produce a deficit in category knowledge. Similarly, sensory-motor regions should become active when people with intact brains process categories. This is an important hypothesis that requires careful empirical assessment. Damasio’s (1989) convergence zones provide one way to frame the issue. According to his view, local associative areas in sensory-motor regions capture patterns of perceptual representation. Later, associative areas reactivate these sensory-motor representations to simulate experience and thereby support cognitive processing. As the quotation from Damasio (1989) in section 2.1 illustrates, he believes that reactivating sensory-motor representations is necessary for representing knowledge – activation in a nearby associative area never stands in for them. In principle, however, activation in local associative areas could stand in for sensory-motor representations during symbolic activity, thereby implementing something along the lines of amodal symbols, with perceptual representations ultimately being epiphenomenal. Behavioral findings argue against this proposal. Studies cited throughout the target article show that perceptual variables predict subjects’ performance on conceptual tasks. For example, Barsalou et al. (1999) report that occlusion affects feature listing, that size affects property verification, and that detailed perceptual form predicts property priming. The view that activation in associative areas represents concepts does not predict these effects or explain them readily. Instead, these effects are more consistent with the view that subjects are reactivating sensory-motor representations. 
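Damasio's convergence-zone proposal, in which associative areas capture patterns of sensory-motor activity and later reactivate them to simulate experience, can be caricatured with a small Hebbian auto-associator. This sketch is purely illustrative and is not Damasio's model: the number of units, the stored patterns, and the update rule are all assumptions made for the demonstration.

```python
import random

random.seed(1)
N = 64  # units in a toy "sensory-motor" region

# Three stored +/-1 activity patterns ("perceptual states").
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(3)]

# Hebbian weights play the role of the "convergence zone": they record
# which units tend to be co-active across the stored patterns.
W = [[0.0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / len(patterns)

def reactivate(cue, steps=10):
    """Settle the network: the associative weights re-enact a stored pattern."""
    state = list(cue)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state

# Partial sensory cue: half of pattern 0 is given, the rest is unknown.
cue = patterns[0][:N // 2] + [0] * (N - N // 2)
recalled = reactivate(cue)
overlap = sum(a == b for a, b in zip(recalled, patterns[0])) / N
print(f"overlap with the stored pattern: {overlap:.2f}")
```

Note what the sketch makes concrete: the associative weights have nothing to "stand in for" the sensory-motor units, because reactivation occurs in those very units. This is why simultaneous deficits in perception and conception are diagnostic for deciding between the two views.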
Studies on language comprehension similarly exhibit such effects (sects. 4.1.6, R4.5, and Zwaan et al.).

Response/Barsalou: Perceptual symbol systems. BEHAVIORAL AND BRAIN SCIENCES (1999) 22:4, p. 637

Further empirical evidence is clearly required to resolve this issue with confidence. Neuroimaging studies that assess both sensory-motor and local associative areas during perception and conception should be highly diagnostic. Similarly, lesion studies that assess simultaneous deficits in perception and conception should be useful, as well. If sensory-motor representations are epiphenomenal, then patients should frequently be observed who exhibit deficits in sensory-motor representation but not in corresponding knowledge.

R1.2. The causal theory of concepts.

Ignoring my explicit proposals to the contrary, Aydede claims that I addressed only the form of perceptual symbols and ignored their reference. From this flawed premise, he proceeds to the conclusion that a causal theory of amodal symbols must be correct. On the contrary, the target article clearly addressed the reference of perceptual symbols and agreed with Aydede that causal relations – not resemblances – are critical to establishing reference. Section 2.4.3 states that simulators become causally linked to physical categories, citing Millikan (1989). Similarly, section 3.2.8 is entirely about the importance of causal mechanisms in establishing the reference of perceptual symbols, citing Dretske (1995). Indeed, this section makes Aydede’s argument that resemblance is inadequate for establishing a symbol’s reference. Although resemblance is often associated with perceptual views of knowledge, I took great pains to dissociate the two. Although I would have valued Aydede’s analysis of my actual proposal, reminding him of it provides an opportunity for elaboration. I agree with the many philosophers who hold that much conceptual knowledge is under the causal or nomic control of physical categories in the environment.
Indeed, this assumption underlies all learning theories in cognitive science, from exemplar theories to prototype theories to connectionism. In this spirit, section 2.4.3 proposed that simulators come under the control of the categories in the world they represent. As a person perceives the members of a category, sensory-motor systems are causally driven into particular states, which are captured by various associative areas. The target article does not specify how these causal processes occur, disclaiming explicitly any attempt to provide a theory of perception (sect. 2, para. 1–3). Nor does it make any claim that perceptual states resemble the environment (although they do in those brain regions that are topographically mapped). It is nevertheless safe to assume that causal processes link the environment to perceptual states. The critical claim of perceptual symbol systems is that these causally produced perceptual states, whatever they happen to be, constitute the representational elements of knowledge. Most critically, if the environment is causally related to perceptual states, it is also causally related to symbolic states. Thus, perceptual symbol systems constitute a causal theory of concepts. Simulators arise in memory through the causal process of perceiving physical categories. In contrast, consider the causal relations in the amodal theories that Aydede apparently favors (as do Adams & Campbell). According to these views, physical entities and events become causally related to amodal symbols in the brain. As many critics have noted, however, this approach suffers the problems of symbol grounding. Exactly what is the process by which a physical referent becomes linked to an amodal symbol? If philosophers want to take causal mechanisms seriously, they should provide a detailed process model of this causal history. 
In attempting to do so, they will discover that perception of the environment is essential (Harnad 1987; 1990; Höffding 1891; Neisser 1967), and that some sort of perceptual representation is necessary to mediate between the environment and amodal symbols. Aydede notes that perceptual representations could indeed be a part of this sequence, yet he fails to consider the implication that potentially follows: Once perceptual representations are included in the causal sequence, are they sufficient to support a fully functional conceptual system? As I argue in the target article, they are. If so, then why include an additional layer of amodal symbols in the causal sequence, especially given all the problems they face? Nothing in Aydede’s commentary makes a case for their existence or necessity.

R1.3. Supramodal systems for space and time.

Two commentaries suggest that spatial processing and temporal processing reside in amodal systems (Freksa et al.; Mitchell & Clement). According to this view, a single spatial system subserves multiple modalities, allowing coordination and matching across them. Thus, the localization of a sound can be linked to an eye movement that attempts to see the sound’s source. Similarly, a single temporal system subserves multiple modalities, coordinating the timing of perception and action. Because the spatial and temporal systems are not bound to any one modality, they are amodal, thereby constituting evidence for amodal symbol systems. To my knowledge, the nature of spatial and temporal systems remains an open question. To date, there is no definitive answer as to whether each modality has its own spatial and temporal processing system, or whether general systems serve all modalities. Indeed, both modality-specific and modality-general systems may well exist. Ultimately, this is an empirical question. Imagine, however, that domain-general systems are found in the brain.
In the framework of the target article, they would not constitute amodal systems. As defined in section 1.2, an amodal symbol system contains representations that are transduced from perceptual states – they do not contain representations from perception. If domain-general systems for perceiving space and time exist, their representations do not satisfy this definition for amodal symbols. To see this, consider the representations that would arise in domain-general systems during perception. Although these representations are not modality-specific, they are nevertheless perceptual. They constitute fundamental parts of perception that integrate the specific modalities – what I will call supramodal representations. To appreciate why these representations are not amodal, consider the representation of space and time during conceptual processing. If supramodal representations in perception are also used to represent space and time during conceptual processing, they constitute perceptual symbols. Amodal symbols for space and time would only exist if supramodal representations during perception were transduced into a new representation language that supports conceptual processing. Should no such transduction process occur, supramodal representations of time and space form central components of perceptual symbol systems.

R1.4. Latent semantic analysis (LSA).

Based on my arguments about artificial intelligence (sect. 4.4), Landauer takes me to mean that no silicon-based system could acquire human knowledge. He then shows how computers that implement LSA mimic humans on a wide variety of knowledge-based tasks. I never claimed, however, that amodal systems cannot mimic humans behaviorally! Indeed, section 1.2 noted that amodal symbols have so much power that they can probably mimic all human behavior.
My claim in section 4.4 was different, namely, if knowledge is grounded in sensory-motor mechanisms, then humans and computers will represent knowledge differently, because their sensory-motor systems differ so radically. I acknowledged the possibility that future technological developments might produce computers with sensory-motor systems closer to ours, in which case their knowledge could be represented similarly. No one is currently under the illusion, however, that the input-output systems of today’s computers are anything like human sensory-motor systems. Thus, if knowledge is implemented in perceptual systems, it would have to take very different forms. In making his case for LSA, Landauer fails to address the problems I raise for amodal symbol systems in section 1.2: (1) What is the direct evidence that co-occurrence frequency between words controls performance? A system based on co-occurrence frequencies can mimic human behavior, thereby providing indirect evidence that this construct is useful, but is there evidence implicating this basic unit of analysis directly? (2) What is the neural story? Do lesion and neuroimaging data provide support? (3) How are perception and conception linked? Everyone would agree that a conceptual system helps us interpret perceived events in the world, but how do word co-occurrences map onto perceived individuals in the world? When we perceive an object, how is it categorized using LSA? When we conceptualize an object, how do we find its referent in the world? (4) How does LSA represent knowledge about space and time? It is difficult to see how a system that only tracks word co-occurrence can represent these basic aspects of perception. Other concerns about LSA arise, as well. How does LSA accomplish the conceptual functions addressed in the target article, such as distinguishing types and tokens, implementing productivity, and representing propositions? 
Using only word correlations, it does not seem likely that it can implement these basic human abilities. Finally, are co-occurrences tracked only between words, or between corresponding amodal symbols, as well? If the latter is the case, what is the nature of this amodal system, and how is it related to language?

It is important to note that correlation – not causation – underlies LSA’s accounts of human data. Because word co-occurrence exists in the world and is not manipulated experimentally, infinitely many potential variables are confounded with it. To appreciate this problem, recall how LSA works. On a given task, the likelihood of a particular response (typically a word) is predicted by how well it correlates with the overall configuration of stimulus elements (typically other words). Thus, responses that are highly correlated with stimulus elements are more likely to be produced than less correlated responses. The problem is that many, many other potential variables might well be correlated with word co-occurrence, most notably, the perceptual similarity of the things to which the words refer. It is an interesting finding that computer analyses of word correlations produce a system that mimics human behaviors. It does not follow, though, that these correlations are causally important. Of course the human brain could work like LSA, but to my knowledge, no evidence exists to demonstrate this, or to rule out that variables correlated with word co-occurrence are the critical causal factors.

A number of empirical findings suggest that knowledge is not grounded in word co-occurrence. First, aphasics typically lose only language; they do not lose knowledge (Lowenthal). If knowledge were simply represented in word co-occurrence, they should lose knowledge, as well. Second, Glenberg reports that perceptual affordances enter into language comprehension, and that word co-occurrence cannot explain these results (also see sects. 4.1.6 and R4.5).
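The mechanism recalled above, predicting a response from how well it correlates with the configuration of other words, can be sketched in miniature. The corpus, stopword list, and similarity measure below are invented purely for illustration, and LSA's actual singular-value-decomposition step is omitted; the point is only to show how words come to pattern together from verbal contexts alone.

```python
from collections import Counter
from math import sqrt

# Invented toy corpus; real LSA is trained on large text collections.
docs = [
    "the robin flew to the nest",
    "the sparrow flew to the tree",
    "the robin sat in the tree",
    "the hammer hit the nail",
    "the hammer drove the nail into the wood",
]
stopwords = {"the", "to", "in", "into"}  # crude stand-in for LSA's weighting

# For each word, count the words it co-occurs with across documents.
cooc = {}
for d in docs:
    words = [w for w in d.split() if w not in stopwords]
    for w in words:
        cooc.setdefault(w, Counter()).update(v for v in words if v != w)

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm = lambda c: sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v))

# Words used in similar verbal contexts come out similar, with no
# access at all to what robins or hammers look like.
sim_bird = cosine(cooc["robin"], cooc["sparrow"])
sim_tool = cosine(cooc["robin"], cooc["hammer"])
print(sim_bird > sim_tool)  # prints True
```

The confound described above is visible even here: robin and sparrow pattern together partly because the things they name look alike and occur in similar situations, perceptual variables that the word counts silently inherit.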
Landauer believes that this is because words for certain idioms, metaphors, and modern phrases have not been included in the language corpus that underlies LSA, but again the problem of correlation versus causation arises. Landauer wants to believe that these missing entries have causal significance, when actually their status is merely correlational. Because Glenberg’s affordances are also correlational, more powerful laboratory techniques are needed to tease these factors apart. To the extent that sensory-motor affordances are implicated in conceptual processing, however, LSA has no way of handling them. In this spirit, Solomon and Barsalou (1999a; 1999b) controlled the strength of lexical associations in the property verification task and found effects of perceptual variables. In a task involving only linguistic stimuli, nonlinguistic factors significantly affected performance when linguistic factors were held constant.

One way to extend LSA would be to apply its basic associative mechanism to perceptual elements. Just as co-occurrences of words are tracked, so are co-occurrences in perception. Depending on the specific details of the formulation, such an approach could be virtually identical to the target article’s proposal about frames. As described in section 2.5, a frame accumulates components of perception isolated by selective attention, with the associative strength between components reflecting how often they are processed together. Thus, the co-occurrence of perceptual components lies at the heart of these perceptually grounded frames. Notably, however, it is proposed that these structures are organized by space and time, not simply by associative strength. To later simulate perceptual experience, the mere storage of associations will not be sufficient – extensive perceptual structure will be necessary, as well. If so, a simple extension of LSA from words to perception probably will not work.
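The associative core of such a frame, components isolated by selective attention and strengths reflecting how often they are processed together, can be sketched as simple co-occurrence counting. The component names and episodes are hypothetical, invented only to make the bookkeeping concrete.

```python
from collections import Counter
from itertools import combinations

# Hypothetical components isolated by selective attention on four
# successive perceptions of cars (the component names are invented).
episodes = [
    {"wheel", "door", "window", "engine-sound"},
    {"wheel", "door", "seat"},
    {"wheel", "window", "engine-sound"},
    {"wheel", "door", "window"},
]

component_freq = Counter()   # how often each component is attended
pair_strength = Counter()    # associative strength between components
for attended in episodes:
    component_freq.update(attended)
    pair_strength.update(frozenset(p)
                         for p in combinations(sorted(attended), 2))

print(component_freq["wheel"])                      # attended in all 4 episodes
print(pair_strength[frozenset({"wheel", "door"})])  # co-attended in 3 episodes
```

As the passage above stresses, this is exactly where a bare associative extension of LSA stops short: the counts capture associative strength but none of the spatial-temporal organization that simulating a perceptual experience would require.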
Furthermore, we have known since Chomsky (1957) that the human brain is not simply a generalization mechanism. Besides generalizing around known exemplars, the brain produces infinite structures from finite elements productively. Not only is this true of language, it is also true of conception (Fodor & Pylyshyn 1988). People can conceive of concepts that they have never encountered and that are not generalizations of past concepts. Neither a word-based nor a perception-based version of LSA can implement productivity. There is no doubt that the human brain is an associative device to a considerable extent, yet it is also a productive device. Although LSA may explain the associative aspects of cognition, it appears to have no potential for explaining the productive aspects.

Finally, in a superb review of picture-word processing, Glaser (1992) concluded that a mixture of two systems accounts for the general findings in these paradigms. Under certain conditions, a system of word associations explains performance; under other conditions, a conceptual system with a perceptual character explains performance; under still other conditions, mixtures of these two systems are responsible. Based on my own research, and also on reviews of other relevant literature, I have become increasingly convinced of Glaser’s general thesis, although perhaps in a somewhat more radical form: People frequently use word associations and perceptual simulations to perform a wide variety of tasks, with the mixture depending on task circumstances (e.g., Barsalou et al. 1999; Solomon & Barsalou 1999a; 1999b; Wu & Barsalou 1999). Furthermore, I have become convinced that LSA provides an excellent account of the word association system (as does Hyperspace Analogue to Language, HAL; Burgess & Lund 1997).
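The productivity point above, infinite structures from finite elements, can be made concrete with a deliberately minimal sketch: a handful of elements and one recursive combination rule already enumerate structures that were never observed and are not generalizations of anything observed. The example concepts are invented for illustration.

```python
# Finite elements plus one recursive combination rule: a minimal
# sketch of productivity. The concept names are purely illustrative.
objects = ["cloud", "piano"]
modifiers = ["glass", "flying"]

def conceive(depth):
    """Enumerate modifier-object combinations up to a given nesting depth."""
    if depth == 0:
        return list(objects)
    smaller = conceive(depth - 1)
    return smaller + [f"{m} {c}" for m in modifiers for c in smaller]

novel = conceive(2)
print(len(novel))                     # 18 structures from 4 elements
print("flying glass piano" in novel)  # a concept never encountered
```

The count grows without bound as the depth increases, which is exactly what a mechanism restricted to tracking co-occurrences among observed elements cannot deliver.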
When people actually use word associations to perform a task, LSA and HAL provide excellent accounts of the underlying processing. However, when a task demands true conceptual processing, or when the word association strategy is blocked, the conceptual representations required are implemented through perceptual simulation. Although such simulations may rely on structures of associated perceptual components, they also rely on nonassociative mechanisms of perception and productivity.

R1.5. Amodal symbols arise through abstraction.

Adopting the framework of connectionism, Gabora argues that amodal symbols arise as similar experiences are superimposed onto a common set of units. As idiosyncratic features of individual experiences cancel each other out, shared features across experiences coalesce into an attractor that represents the set abstractly. I strongly agree that such learning mechanisms are likely to exist in the brain, and that they are of considerable relevance for perceptual symbol systems. Most important, however, this process does not satisfy the definition of amodal symbols in section 1.2. To the extent that Gabora’s learning process occurs in the original units that represent perception, it implements a perceptual symbol system, even if generic attractors evolve over learning. As described in section 2.5 and shown in Figure 3, repetitions of the same perceptual symbol are superimposed to produce increasingly generic components of frames. Conversely, Gabora’s learning process would only become an amodal symbol system if a population of nonperceptual units were to recode perceptual activity, as in a feedforward net with hidden units (sect. 1.2). As long as no such recoding takes place, however, I view Gabora’s proposal as highly compatible with the proposal that abstractions are grounded perceptually. Schwartz et al.
make virtually the same proposal as Gabora but view it as an implementation of a perceptual symbol system, not of an amodal symbol system.

R1.6. General skepticism.

The commentaries by Landau and Ohlsson raise specific criticisms of perceptual symbol systems that I will address later. Here I address their general belief that amodal symbol systems must be correct. Landau and Ohlsson both view perceptual symbol systems as essentially a classic empiricist theory. Because they believe that prior empiricist theories have been completely discredited, they believe that perceptual symbol systems could not possibly work either. First, these commentaries prove the point I made in the target article that perceptual approaches to knowledge are often oversimplified for the sake of criticism (sect. 1.3). I do not have the sense that Landau and Ohlsson have appreciated this point, or have made much of an attempt to entertain anything but a simplistic perceptual view. I continue to believe that when more substantial and ambitious perceptual theories are entertained, they are not only plausible and competitive, they are superior. Second, the theory in the target article departs significantly from classic empiricist theories. Whereas prior theories focused on consciously experienced images, perceptual symbol systems focus on neural states in sensory-motor systems (sect. 2.1). To my mind, this is the most exciting feature of this proposal, primarily because it leads one to think about the perceptual view in provocative new ways. Third, I explicitly stated that my theory is not a simple empiricist view of knowledge, appealing to genetically regulated mechanisms that structure brain development (sect. 2.4.1). In particular, I proposed that genetically based mechanisms for space, objects, movement, and emotion organize perceptual symbols. Landau and Ohlsson seem to believe that nativism is the answer to the ills of empiricism.
However, it is one thing to view nativism naively as a panacea for every difficult problem, and quite another to specify the complex epigenetic processes that characterize the genetic regulation of development (Elman et al. 1996). Fourth, perceptual symbol systems go considerably further than previous theories in laying out a fully functional theory of concepts, largely because the criteria for such a theory were not known until modern times. To my knowledge, no prior perceptual theory of knowledge has accomplished this. Although prior theories have included mechanisms that had potential for implementing these functions (sect. 1.3), they were typically not developed in these ways. Finally, in embracing amodal symbol systems as the only reasonable alternative, Landau and Ohlsson fail to address the problems raised for amodal symbol systems in section 1.2. Without resolving these major problems, it is difficult to see how anyone could have such complete confidence in a theory.

R2. Specifying perceptual symbol systems

The majority of the commentaries addressed specific aspects of perceptual symbol systems. Some commentaries attempted to show that empirical findings or theoretical arguments support some aspect of the theory. Others developed a particular aspect in a positive manner. Still other commentaries attempted to undermine the theory by raising problems for some aspect. I have organized these remarks around each aspect of the theory addressed. For each, I have combined the relevant commentaries with the hope of clarifying and developing that part of the theory.

R2.1. Perception, perceptual symbols, and simulation.

Quite remarkably, Aydede claims that there is no way to distinguish perceptual and nonperceptual systems independently of modal and amodal symbols. Given the huge behavioral and neural literatures on perception, this is a rather surprising claim.
A tremendous amount is known about the perceptual systems of the brain independent of anything anyone could possibly say about conceptual systems. Clearly, we know a lot about perceptual abilities and the neural mechanisms that implement them. Identifying these abilities and mechanisms independently of perceptual symbols is not only possible but has long since been accomplished in the scientific disciplines that study perception. In defining perceptual symbols, I simply used these well-recognized findings. Thus, section 2.1 proposed that well-established sensory-motor mechanisms represent edges, vertices, colors, and movements not only in perception but also in conception. Perceptual symbols follow the well-trodden paths of perception researchers in claiming that the representational mechanisms of perception are the representational mechanisms of knowledge. What is perceptual about perceptual symbol systems is perception in the classic and well-established sense. The exceptions are introspective symbols (sect. 2.3.1), whose neural underpinnings remain largely unexplored (except for emotion and motivation).

Brewer similarly worries that perceptual symbols, neurally defined, are of little use because we currently have no knowledge of the neural circuits that underlie them. Again, however, we do have considerable knowledge of the neural circuits that underlie sensory-motor processing. Because perceptual symbols are supposed to use these same circuits to some extent, we do know something about the neural configurations that purportedly underlie them. Although perceptual symbols use perceptual mechanisms heavily, this does not mean that conceptual and perceptual processing are identical. In the spirit of Lowenthal’s commentary, important differences must certainly exist.
As noted in sections 2.1 and 4.3 of the target article, perceptual symbols probably rely more heavily on memory mechanisms than does standard perception. Furthermore, the thesis of section 2.4.7 is that a family of representational processes capitalizes on a common set of representational mechanisms in sensory-motor regions, while varying in the other systems that use them. This assumption is also implicit throughout the subsections of section 4.1, which show how basic cognitive abilities could draw on the same set of representational mechanisms. The point is not that these abilities are identical, only that they share a common representational basis.

Mitchell & Clement claim that I never define “simulation” adequately. It is clear throughout the target article, however, that a simulation is the top-down activation of sensory-motor areas to reenact perceptual experience. As section 2.4 describes in some detail, associative areas reactivate sensory-motor representations to implement simulations.

Wells claims that the extraction of perceptual symbols (sect. 2.2) is not an improvement over the unspecified transduction process associated with amodal symbols (sect. 1.2). The improvement, as I see it, is that the extraction of perceptual symbols is a well-specified process. It is easy to see how selective attention could isolate and store a component of perception, which later functions as a perceptual symbol. In contrast, we have no well-specified account of how an amodal symbol develops.

Finally, I would like to reiterate one further point about my use of “perception.” The use of this term in “perceptual symbol systems” is not meant to imply that movements and action are irrelevant to knowledge. On the contrary, many examples of perceptual symbols based on movements and actions were provided throughout the target article.
Rather, “perception” refers to the perception of experience from which perceptual symbols are extracted, including the perception of movement and action. Thus, “perception” refers to the capture of symbolic content, not to the content itself. For additional discussion of perceptual representations in knowledge, see Brewer and Pani (1983) and Pani (1996).

R2.2. Features and attention.

One could take Wells’s concern about perceptual symbol extraction being unconstrained to mean that it provides no leverage on the problem of what can be a feature. Landau makes this criticism explicitly. Notably, the amodal account that Landau apparently prefers places no constraints on features at all. As the introduction to section 2 in the target article noted, amodal theories have not resolved the feature issue themselves. It is puzzling that what counts as evidence against the perceptual view is not also counted as evidence against the amodal view. Furthermore, as Landau notes, perceptual views contain the seeds of a solution to this problem. Specifically, she notes that classic empiricist theories define conceptual features as features that arise from the interaction of the sense organs with the physics of the environment. In other words, the features of conception are the features of perception. Clearly, not all sensory distinctions make their way to the conceptual level, for example, opponent color processing and center-surround inhibition. Yet many conceptual features do appear to have their roots in sensory systems, including the features for shape, color, size, and texture that occur frequently in reports of conceptual content. Although Landau does not credit perceptual symbol systems with drawing on perception for features, they certainly do, given the explicit mention of perceptually derived features in section 2.1. Because of its complexity, I did not attempt to resolve the feature problem in the target article.
The features of conception clearly go beyond perceptual features, as I noted in Note 5. Most important, high-level goals and conceptual structures capture aspects of perception that become features through selective attention. In most cases, the information selected is not a feature computed automatically in sensory processing but a higher-order configuration of basic sensory-motor units. It was for this reason that I cited Schyns et al. (1998) as showing that conceptual features clearly go beyond perceptual features. The architecture of perceptual symbol systems is ideally suited to creating new features that exceed sensory-motor features. Because high-level frames and intuitive theories are represented as perceptual simulations (sect. 2.5 and 4.1.2), they are well suited for interacting with perception to isolate critical features. Furthermore, the symbol formation process in sect. 2.2 is well suited for creating new features through selective attention. A great deal clearly remains to be specified about how this all works. It is much less clear how amodal symbol systems accomplish the construction of novel features, again because they lack an interface between perception and cognition. Schwartz et al. argue that I fail to specify how attention is controlled during the formation of perceptual symbols, such that perceptual symbols are as arbitrary as amodal symbols. Again, however, perceptual systems provide natural constraints on the formation of perceptual symbols. One prediction is that when empirical studies examine this process, they will find that perceptual symbols exhibit a weak bias for following the structure of perception. By no means, however, must perceptual symbols follow only this structure. Because selective attention is so flexible, it can focus on complex configurations of perceptual information to establish symbols that serve higher goals of the system.
Response/Barsalou: Perceptual symbol systems BEHAVIORAL AND BRAIN SCIENCES (1999) 22:4 641
The open-endedness of the symbols that people use should not be viewed as a weakness of perceptual symbol systems. Instead, it is something that any theory needs to explain, and something that perceptual symbol systems explain through the flexible use of selective attention (sect. 2.2). It would be much more problematic if the theory were simply limited to symbols that reflect the computations of low-level sensory-motor processing. Ultimately, Schwartz et al. have another agenda. They argue that schematic category representations result from recruitment and fractionation, not from selective attention. It is implicit in their view that holistic perceptual states are recorded during learning, with differences across them canceling out to produce schematic representations of perceptual symbols. Similarly, Gabora argues for the importance of these mechanisms. Although I endorse these mechanisms, I strongly disagree with the view that selective attention plays no role in learning. As discussed in sections 2.2 and 4.1.3, selective attention has been strongly implicated in learning across decades of research in many areas. Furthermore, it is naive to believe that holistic recordings of perception occur, as Hochberg argues eloquently (also see Hochberg 1998). Being highly flexible does not make selective attention irrelevant to conceptual processing. Instead, it is absolutely essential for creating a conceptual system (sects. 1.4 and 4.1.3). Hochberg and Toomela each propose that active information-seeking and goal pursuit guide selective attention and hence the acquisition of perceptual symbols. Glenberg’s arguments about the importance of situated action are also consistent with this view. I agree strongly that top-down goal achievement often guides selective attention, such that perceptual information relevant to these pursuits becomes acquired as perceptual symbols. However, bottom-up mechanisms may produce perceptual symbols, as well.
For example, attention-grabbing onsets and changes in perception may attract processing and produce a perceptual symbol of the relevant content. Similarly, the perceptual layout of a physical stimulus may guide attention and the subsequent extraction of perceptual symbols. In this spirit, Brewer notes that physical proximity better predicts recall order for the objects in a room than does their conceptual relatedness. Thus, bottom-up as well as top-down factors are probably important to selecting the content of perceptual symbols. Furthermore, appealing to the importance of top-down factors pushes the difficult issues back a level. Attention is clearly under the control of goals and action to a considerable extent, yet how do we characterize the mechanisms underlying goals and actions that guide attentional selection? R2.3. Abstraction, concepts, and simulators. I am the first to admit that specifying the mechanisms underlying simulators constitutes perhaps the most central challenge of the theory, and I noted this throughout the target article. Later I will have more to say about implementing simulators in specific mechanisms (sect. R5.3). Here I focus on some misunderstandings about them. Without citing any particular statement or section of the target article, Ohlsson claims that I equated selection with abstraction. According to him, I believe that selectively storing a wheel while perceiving a car amounts to creating an abstraction of cars in general. Nowhere does the target article state that selection is abstraction. On the contrary, this is a rather substantial and surprising distortion of my view. What I did equate roughly with selection was schematization, namely, the information surrounding focal content is filtered out, leaving a schematic representation of the component (sect. 2.2). However, I never claimed that a schematization is an abstraction, or that it constitutes a concept.
Rather, I argued that the integration of many schematic memories into a simulator is what constitutes a concept (sect. 2.4). Actually, I never said much about abstraction per se but focused instead on the development of types (sects. 2.4.3 and 3.2.1). If I were to characterize abstraction in terms of perceptual symbol systems, I would define it as the development of a simulator that reenacts the wide variety of forms a kind of thing takes in experience. In other words, a simulator contains broad knowledge about a kind of thing that goes beyond any particular instance. Adams & Campbell claim that I characterize simulators in terms of the functions they perform, such as categorization and productivity. Most basically, however, I defined simulators as bodies of knowledge that generate top-down reenactments or simulations of what a category is like (sect. 2.4). Clearly, this is a functional specification, but it is not the one that Adams & Campbell attribute to me. Adams & Campbell further complain that I fail to say anything about the mechanisms that underlie simulators (so do Dennett & Viger and Schwartz et al.). In section 2.4, however, I defined a simulator as a frame plus the mechanisms that operate on it to produce specific simulations. In section 2.5, I provided quite a bit of detail on the frames that underlie simulators. I acknowledged that this account is far from complete, and I agree that developing a full-blown computational theory is extremely important. For the record, though, I certainly did say something about the mechanisms that underlie simulators, and I continue to believe that these ideas provide useful guidance in developing specific accounts (sect. R5.1). Landau believes that simulations are not capable of capturing the underlying content or abstractions of concepts. If I understand correctly, she is making the same argument Adams & Campbell made earlier that perceptual knowledge is epiphenomenal and not constitutive of concepts.
In a sense, this is also similar to Ohlsson’s argument that storing part of a category instance does not represent the respective category. Again, however, the claim is that an entire simulator – not a specific perceptual representation – comes to stand for a category. As just discussed, a simulator captures a wide variety of knowledge about the category, making it general, not specific. Furthermore, the abstraction process described by Gabora and by Schwartz et al. enters into the frames that underlie simulators, thereby making them generic (sects. R1.5 and R2.4). Finally, a simulator stands in a causal relation to its physical category in the world (sect. R1.2). On perceiving an instance of a category, a simulator is causally activated, bringing broad category knowledge to bear on the instance (sects. 2.4.3 and 3.2.1). For all these reasons, simulators function as concepts. Finally, Siebel takes issue with my claim that bringing the same simulator to bear across different simulations of a category provides stability among them (sect. 2.4.5). Thus, I claimed that the different simulations a person constructs of birds are unified because the same simulator for bird produced all of them. As Siebel correctly notes, each time a simulator becomes active, the perceptual symbols processed become stored or strengthened, thereby changing the simulator’s content. Thus, the same simulator – in terms of its content – cannot be brought to bear on different simulations to provide stability. “Same,” however, does not refer to content but to the body of knowledge that stands in causal relation historically to a physical category. Stability results from the fact that one particular body of knowledge typically becomes active when a particular category is conceptualized, even though that body of knowledge changes in content over time.
Because all conceptualizations can be linked to this particular body of knowledge historically, they gain stability. R2.4. Conceptual essences and family resemblances. Lurking behind the concern that simulators cannot represent concepts may well be the belief that concepts have essences. When Ohlsson, Landau, and Adams & Campbell question whether perceptual representations capture underlying constitutive content, I take them to be worrying about this. Since Wittgenstein (1953), however, there have been deep reservations about whether concepts have necessary and sufficient features. For this reason, modern theories often portray essences as people’s naive belief that necessary and sufficient features define concepts, even when they actually do not (e.g., Gelman & Diesendruck, in press). Should a concept turn out to have defining features, though, a simulator could readily capture them. If a feature occurs across all the instances of a category, and if selective attention always extracts it, the feature should become well established in the frame that underlies the simulator (sect. 2.5, Fig. 3; sect. R1.5; Gabora; Schwartz et al.). As a result, the feature is almost certain to become active later in simulations of the category constructed. Simulators have the ability to extract features that are common across the instances of a category, should they exist. Research on the content of categories indicates that few features, if any, are typically common to all members of a category (e.g., Malt 1994; Malt et al., in press; Rosch & Mervis 1975). However, a small subset of features is likely to be true of many category members, namely, features that are characteristic of the category but not defining. Should a family resemblance structure of this sort exist in a category, perceptual symbol systems are quite capable of extracting it. Again, following the discussion of frames (sect. 2.5, Fig. 3) and abstraction (sects.
R1.5 and R2.3), the more often a feature is extracted during the perceptual processing of a category, the better established it becomes in the category’s frame. To the extent that certain features constitute a statistical regularity, the frame for the category will capture this structure and manifest it across simulations constructed later. The more a feature is processed perceptually, the more it should occur in category simulations. Finally, Ohlsson seems to believe that a perceptual view of concepts could only work if common perceptual features underlie all instances of a concept. Nowhere did I claim this, and there is no reason it must be true. As just described, the frame that underlies a simulator captures the statistical distribution of features for a category, regardless of whether it possesses common features. For a category lacking common features, its simulator will produce simulations that do not share features with all other simulations but are related instead by a family resemblance structure. Fauconnier raises the related point that a word may not be associated with a single simulator, taking me to mean that a simulator only produces simulations that share common features. Again, however, there is no a priori reason that a simulator cannot produce disjunctive simulations. Because different perceptions may be associated with the same word, different perceptual symbols may be stored disjunctively in the frame of the associated simulator and may later produce disjunctive simulations. As Fauconnier suggests, the content of a simulator is accessed selectively to project only the information relevant in a particular context, with considerably different information capable of being projected selectively from occasion to occasion (sect. 2.4.3). R2.5. Frames and productivity. Barsalou and Hale (1993) propose that all modern representation schemes evolved from one of two origins: propositional logic or predicate calculus. 
Consider some of the differences between these two formalisms. Most basically, propositional logic contains binary variables that can be combined with simple connectives, such as and, or, and implies, and with simple operators, such as not. One problem with propositional logic is its inability to represent fundamentally important aspects of human thought and knowledge, such as conceptual relations, recursion, and bindings between arguments and values. Predicate calculus remedied these problems through additional expressive power. Barsalou and Hale show systematically how many modern representation schemes evolved from propositional logic. By allowing binary variables to take continuous forms, fuzzy logic developed. By replacing truth preservation with simple statistical relations between variables, the construct of a feature list developed, leading to prototype and exemplar models. By adding complicated activation and learning algorithms to prototype models, connectionism followed. Notably, neural nets embody the spirit of propositional logic because they implement simple continuous units under a binary interpretation, linked by simple connectives that represent co-occurrence. In contrast, consider representation schemes that evolved from predicate calculus. Classic frame and schema theories maintain conceptual relations, binding, and recursion, while relaxing the requirement of truth preservation. Perceptual symbol systems similarly implement these functions (sect. 2.5.1), while relaxing the requirement that representations be amodal or language-like. It is almost universally accepted now that representation schemes lacking conceptual relations, binding, and recursion are inadequate. Ever since Fodor and Pylyshyn’s (1988) classic statement, connectionists and dynamic systems theorists have been trying to find ways to implement these functions in descendants of propositional logic. 
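The functional contrast at stake here can be made concrete. In a descendant of propositional logic, a state is just a collection of unstructured truth values; in a descendant of predicate calculus, conceptual relations, argument-value bindings, and recursive embedding are explicit. The following Python sketch is purely illustrative (the data structures are my own shorthand for the two formalisms, not a proposal from the target article):

```python
# Descendant of propositional logic: unstructured boolean features.
# Without bindings, "dog chases cat" and "cat chases dog" collapse
# into the same state.
flat_state = {"dog": True, "cat": True, "chases": True}

# Descendant of predicate calculus: a frame names the relation,
# binds arguments to roles, and permits recursive embedding.
chase = ("CHASE", {"agent": "dog", "patient": "cat"})
belief = ("BELIEVE", {"agent": "mary", "content": chase})  # recursion

def filler(frame, role):
    """Return the argument bound to a role in a frame."""
    _, roles = frame
    return roles[role]

# The bindings distinguish who chases whom; the flat state cannot.
assert filler(chase, "agent") == "dog"
assert filler(filler(belief, "content"), "patient") == "cat"
```

Because the flat representation discards bindings, the two chase scenarios are indistinguishable in it, whereas the frame representation keeps them apart and supports embedding, as in the belief example.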
Indeed, the commentaries by Edelman & Breen, Markman & Dietrich, and Wells all essentially acknowledge this point. The disagreement lies in how to implement these functions. Early connectionist formulations implemented classic predicate calculus functions by superimposing vectors for symbolic elements in predicate calculus expressions (e.g., Pollack 1990; Smolensky 1990; van Gelder 1990). The psychological validity of these particular approaches, however, has never been compelling, striking many as arbitrary technical attempts to introduce predicate calculus functions into connectionist nets. More plausible cognitive accounts that rely heavily on temporal asynchrony have been suggested by Shastri and Ajjanagadde (1993) and Hummel and Holyoak (1997). My primary concern with this approach is that it is amodal and therefore suffers from the problems noted in section 1.2. Edelman & Breen suggest that spatial relations provide considerable potential for implementing conceptual relations, binding, and recursion. Furthermore, they note that these functions achieve productivity through the combinatorial and recursive construction of simulations. As sections 2.5 and 3.1 indicate, I agree. Perceptual symbol systems provide more than a means of representing individual concepts; they provide powerful means of representing relations between them. My one addition to Edelman & Breen’s proposal would be to stress the importance of temporal relations – not just spatial relations – as the basis of frames and productivity. The discussion of ad hoc categories in section 3.4.4 illustrates this point. One can think of a simulation as having both spatial and temporal dimensions. For example, a simulation of standing on a chair to change a light bulb contains entities and events distributed over both space and time.
As suggested in section 3.4.4, ad hoc categories result from disjunctively specializing space-time regions of such simulations. Thus, the entity that is stood on to change a light bulb constitutes a space-time region, which can be specialized with simulations of different objects. Similarly, the agent standing on the chair can be specialized differently, as can the burned-out light bulb, the new light bulb, the fixture, and so forth. Each region that can be specialized constitutes a potential attribute in the frame for this type of event. Note that such regions are not just defined spatially, they are also defined temporally. Thus, the regions for the burned-out bulb, the new bulb, and the chair all occupy different regions of time in the simulation, not just different regions of space. Because there are essentially an infinite number of space-time regions in a simulation, there are potentially an infinite number of frame attributes (Barsalou 1992; 1993). Markman & Dietrich observe that frame structure arises in perceptual symbol systems through nativist mechanisms. With appropriate acknowledgment of epigenesis (Elman et al. 1996) and eschewing genetic determinism, I agree. Frame structure arises from the basic mechanisms of perception. Indeed, I have often speculated informally that predicate calculus also originated in perception. Markman & Dietrich further speculate that language is not the origin of frame structure. Conversely, one might speculate that the frame-like structure of language originated in perception as well. Because perception has relations, binding, and recursion, language evolved to express this structure through verbs, sentential roles, and embedded clause structure, respectively. Finally, Markman & Dietrich suggest that frame structure is not learned, contrary to how certain learning theorists might construe its origins in neural nets. I agree that the basic potential for frames lies deep in the perceptual architecture of the brain.
No doubt genetic regulation plays an important role in producing the attentional, spatial, and temporal mechanisms that ultimately make frames possible. Yet, I hasten to add that specific frames are most certainly learned, and that they may well obey connectionist learning assumptions during their formation. Again, though, these connectionist structures are not amodal but are instead complex spatio-temporal organizations of sensory-motor representations. Finally, Wells agrees that frame structure is important. Without providing any justification, though, he claims that only amodal symbol systems can implement productivity, not perceptual symbol systems. Perhaps perceptual symbol systems do not exhibit exactly the same form of productivity as amodal symbol systems, but to say that they do not implement any at all begs the question. As section 3.1 illustrated in considerable detail, the hierarchical composition of simulations implements the combinatoric and recursive properties of productivity quite clearly. Wells then proceeds to argue that productivity does not reside in the brain but resides instead in interactions with the environment. This reminds me of the classic behaviorist move to put memory in the environment as learning history, and I predict that Wells’s move will be as successful. I sympathize with Wells’s view that classic representational schemes have serious flaws. As Hurford notes, however, it is perhaps unwise to throw out the symbolic baby with the amodal bath water. There are many good reasons for believing that the brain is essentially a representational device (Dietrich & Markman, in press; Prinz & Barsalou, in press b). Some theorists have gone so far as arguing that evolution selected representational abilities in humans to increase their fitness (Donald 1991; 1993). 
Perceptual symbol systems attempt to maintain what is important about representation while similarly attempting to maintain what is important about connectionism and embodied cognition (sect. R6). The brain is a statistical, embodied, and representational device. It is our ability to represent situations offline, and to represent situations contrary to perception, that makes us such amazing creatures. R2.6. Analogy, metaphor, and complex simulations. As noted by Markman & Dietrich, the ability of perceptual symbol systems to represent frames makes it possible to explain phenomena that require structured representations, including analogy, similarity, and metaphor. Indurkhya further notes that, by virtue of being perceptual, frames provide powerful means of explaining how truly novel features emerge during these phenomena. In a wonderful example, he illustrates how blending a perceptual simulation of sunlight with a perceptual simulation of the ocean’s surface produces the emergent feature of harp strings vibrating back and forth. Indurkhya argues compellingly that perceptual symbol systems explain the emergence of such features much more naturally than do amodal symbol systems. In this spirit, he suggests that conceptual blending be called “perceptual blending” to highlight the importance of perceptual representations, and he cites empirical evidence showing that perception enters significantly into metaphor comprehension. Fauconnier notes that I fail to acknowledge the complexity of simulations in language and thought. He provides a compelling example of a child who uses sugar cubes and matchbooks to simulate cars and buses driving down the street. Clearly, such examples involve complicated simulations that reside on multiple levels, and that map into one another in complex ways.
Indeed, this particular example pales in complexity next to Fauconnier and Turner’s (1998) examples in which people juxtapose perception of the current situation with past, future, and counterfactual situations. In these examples, people must represent several situations simultaneously for something they say to make sense. To my mind, the fact that the human conceptual system can represent such complicated states-of-affairs illustrates how truly remarkable it is. Brewer’s observation that simulations provide a good account of how scientists represent models further illustrates this ability. The evolution of the human frontal lobes provides one way to think about people’s ability to construct multiple simulations simultaneously and to map between them. Increasingly large frontal lobes may have provided greater inhibitory power to suppress perception and to represent nonpresent situations (cf. Carlson et al. 1998; Donald 1991; 1993; Glenberg et al. 1998). Taking this idea a little further provides leverage in explaining humans’ ability to represent multiple nonpresent situations simultaneously. Not only can humans simulate absent situations, we can simulate several absent situations simultaneously and map between them. The possession of a powerful inhibitory and control system in the frontal lobes may well make this possible. R2.7. Introspection, emotion, and metacognition. Two commentators found my inclusion of introspective processes problematic and unparsimonious (Newton, Toomela). Ultimately, they worry about the mechanisms that perceive introspective events. In place of introspection, Newton suggests that we seek subtle proprioceptive events that ground intuitive understandings of introspection.
For example, experiences of collateral discharge while executing movements might ground the intuitive concept of agency, whereas experiences of eye movements might ground the intuitive concept of seeing. Similarly, Toomela suggests that introspective phenomena arise through interactions of more basic systems, such as perception and language. Although I am sympathetic to these proposals and could imagine them being correct to some extent, I am not at all convinced that it is either necessary or wise to explain all introspection in these ways. Most significantly, people clearly experience all sorts of internal events, including hunger, fatigue, and emotion. Finding ways to ground them in more externally-oriented events seems difficult, and denying their internal experience seems highly counterintuitive. Another problem for Newton and Toomela is explaining the central role that the representation of mental states plays in modern developmental and comparative psychology. Research on theory of mind shows that people represent what they and others are representing (or are not representing) (e.g., Hala & Carpendale 1997). A primary argument to emerge from this literature is that representing mental states is a central ability that distinguishes humans from other species (Tomasello & Call 1997). If we do away with introspection, how do we represent minds in a theory of mind? Also, how do we explain the wide variety of metacognitive phenomena that people exhibit (Charland)? Regarding the problem of what mechanisms perceive representing, I disagree with Newton and Toomela that this leads to an infinite regress whereby additional mechanisms to perceive representations must be added to the basic mechanisms that perceive the world. Rather, I suspect that the brain uses one set of mechanisms to perceive both the world and representations. 
Recall the basic premise of perceptual symbol systems: Top-down simulations of sensory-motor systems represent knowledge, optionally producing conscious states in the process (sect. 2.1). Perhaps the mechanisms that produce conscious experience of sensory-motor systems when driven by bottom-up sensory processing also produce consciousness of the same systems when driven by top-down activation from associative areas (sect. 2.4.7). No new mechanisms are necessary. The one critical requirement is knowing the source of the information that is driving conscious experience. In psychosis, this awareness is lacking, but most of the time, a variety of cues is typically sufficient, such as whether our eyes are closed (Glenberg et al. 1998), and the vividness of the experience (Johnson & Raye 1981). I agree strongly with Charland that emotion is central to perceptual symbol systems, and that emotion’s role in this framework must be developed further. I also agree that there may well be specific brain circuits for processing particular emotions, which later support conceptualizations of these emotions (e.g., circuits for fear and anxiety; Davis 1998). Where I disagree with Charland is in his apparent claim that emotion is a self-contained symbol system. I further disagree with his readings of Damasio (1994) and Lazarus (1991) that perceptual images and appraisal mechanisms belong to an autonomous symbol system for emotion. This is certainly not my reading of their work. On the contrary, I strongly suspect that these cognitive aspects of emotion are provided directly by cognitive – not emotion – mechanisms. Rather than containing its own cognitive system, I suspect that emotion mechanisms complement a separate cognitive system. I hasten to add, however, that these two systems are so tightly coupled that dissociating them creates a distortion of how each one functions alone (Damasio 1994). 
Also, reenacting states in emotion systems – not amodal systems in a general knowledge store – constitutes the basis of representing emotions symbolically in conceptual processing. Finally, Oehlmann presents two metacognitive phenomena and asks how perceptual symbol systems explain them. First, how do people know that they know the solution to a problem without having to simulate the entire solution? Selective attention and schematization provide one account (sect. 2.2). On simulating a solution, selective attention stores the initial and final states in working memory, dropping the intermediate states. By then switching back and forth between the initial and final states, an abbreviated version of the simulation becomes associated with the entire simulation. On later perceiving the initial conditions for these simulations, both become active, with the abbreviated one finishing much sooner. If the final state in the abbreviated simulation is sufficient to produce a response, waiting for the complete simulation to finish is unnecessary. As this example illustrates, selective attention and schematicity provide powerful editing functions on simulations, just as in productivity, making it possible to rise above complete simulations of experience (sect. 3.1). Again, a perceptual symbol system is not a recording system. Oehlmann’s second problem is how perceptual symbol systems explain performance on the false belief task. Although children can be shown to recognize that another person has a different belief than they do, they nevertheless forget this at times and attribute their own belief to the other person. Oehlmann claims that if beliefs were simply simulated, this pattern of results should not occur. If children can simulate another person’s belief correctly on one occasion, why do they not simulate it correctly on all occasions? An obvious explanation is that children at this age are limited in their ability to simulate other people’s mental states. 
Under optimal conditions, they have enough cognitive resources to run different simulations for themselves and others. Under less than optimal conditions, however, they run out of resources and simply run the same simulation for others as they do for themselves. As children grow older and their ability to simulate other minds automatizes, they almost always have enough resources to simulate differing states of mind, and they rarely fall back on the projective strategy. Indeed, this is similar to Carlson et al.’s (1998) proposal that frontal lobe development is the key resource in performing these tasks. When task demands are too high, children do not have enough resources to inhibit simulations of their own mental state when attempting to simulate the mental states of others. R2.8. Situated action. I agree with Glenberg that if a perceptual symbol system fails to support situated action, it is a limited and misguided enterprise. Whatever form the cognitive system takes, it most certainly evolved to support situated action (e.g., Clark 1997; Glenberg 1997; MacWhinney 1998; Newton 1996). In the target article, I focused primarily on how perceptual symbol systems achieve classic symbolic functions, given that this ability has not been appreciated. In the process, I failed to explore how perceptual symbol systems support situated action. I have attempted to remedy this oversight in a more recent paper, arguing that perceptual simulation is an ideal mechanism for monitoring and guiding situated action (Barsalou, in press). Freksa et al. similarly argue that perceptual simulation is well suited for interfacing cognition with action in the world. Because cognition and perception use a common representational system, cognition can be readily linked with perception and action.
Internal models have sufficient perceptual fidelity to the physical world to provide accurate and efficient means of guiding interactions with it. Again, though, causal relations, not just resemblance, are important in establishing these relations (sects. 2.4.3, 3.2.8, and R1.2).

R2.9. Development.

Two commentaries note that classic developmental theories anticipate the importance of perceptual symbol systems in cognition. Mitchell & Clement remind us that Piaget had a lot to say about the development of cognition from sensory-motor processing. Toomela reminds us that Vygotsky and Luria had a lot to say, as well. I agree completely and did indeed neglect these theorists in section 4.2, primarily because Nelson’s (1996) book, which I did cite, does a good job of describing these previous developmental theories as a springboard for her recent theory.

McCune is concerned about my statement that “the same basic form of conceptual representation remains constant across . . . development, and a radically new form is not necessary.” In particular, she notes that children’s ability to use representations symbolically develops considerably over the first two years of life. As children mature, frontal lobe development supports increasingly powerful abilities to formulate and manipulate representations offline from perception. I agree completely (sects. R2.6 and R2.7). In the quote above, however, I was not referring to representation in this sense of symbolic activity. Instead, I was arguing that a transition from a perceptual symbol system to an amodal symbol system does not occur over the course of development. Rather, a single perceptual symbol system develops, achieving critical symbolic milestones along the way. I suspect, however, that even the rudiments of more advanced representational abilities are present very early.
As I speculated in section 3.4.3, prenatal infants may compare expectations to sensory experiences as a precursor to the operations that underlie the adult concepts of truth and falsity. As McCune suggests, however, these early precursors are certainly much more primitive than their later counterparts.

R3. Abstract concepts

The topic raised most frequently by the commentators was abstract concepts. As I noted in section 3.4, accounting for these concepts in perceptual symbol systems is a controversial and challenging task.

R3.1. Abstract concepts have nothing to do with perception.

Arguing that abstract concepts rise above perception, Landauer and Ohlsson view perception as a nuisance and hindrance to representing these concepts. Neither, however, tells us what abstract concepts are. If they are not about events we perceive, then what are they about? Lacking a compelling account of their content, how can we claim to understand them? By “content” in this discussion, I will mean the cognitive representations of abstract concepts, because this is the type of content at issue here. By no means, however, do I intend to imply that physical referents are unimportant. As described earlier for the causal theory of concepts, I strongly agree that they can be central (sect. R1.2).

In section 3.4.2, I proposed a strategy for representing abstract concepts in perceptual symbol systems: (1) identify the event sequences that frame an abstract concept; (2) specify the relevant information across modalities in these sequences, including introspection; and (3) specify the focal content in these sequences most relevant to defining the concept. Most basically, this strategy requires identifying the entities and events to which an abstract concept refers. That is all. Indeed, it would seem to be the same strategy that one should follow in developing amodal accounts of abstract concepts, as well.
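The three steps of this strategy can be read as a small data schema. The following Python sketch is purely illustrative – the class and field names are my own assumptions, not notation from the target article: event sequences frame the concept, each event carries schematic content per modality (including introspection), and a subset of that content is marked as focal.

```python
from dataclasses import dataclass, field

@dataclass
class EventFrame:
    """One event in a framing sequence, with schematic content per modality."""
    name: str
    modal_content: dict  # modality -> schematic content

@dataclass
class AbstractConcept:
    name: str
    event_sequences: list = field(default_factory=list)  # step 1: framing sequences
    focal: list = field(default_factory=list)            # step 3: (event, modality) pairs

# A toy entry for truth (cf. sect. 3.4.3 / Figure 7a): an agent simulates a claim,
# perceives a situation, and compares the two, with the comparison being focal.
truth = AbstractConcept(
    name="truth",
    event_sequences=[[
        EventFrame("simulate claim", {"introspection": "a constructed simulation"}),
        EventFrame("perceive situation", {"vision": "the perceived scene"}),
        EventFrame("compare", {"introspection": "the simulation fits the scene"}),
    ]],
    focal=[("compare", "introspection")],
)
```

Nothing in the sketch commits to amodal or perceptual representation; it merely records what content a theory of either kind would have to supply.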
Regardless of the specific approach taken, it is essential to identify the content of an abstract concept. Once one identifies this content, the issue then arises of how to represent it. One possibility is to transduce it into amodal symbols. Another possibility is to simulate it in the spirit of perceptual symbol systems. Either way, identifying the content comes first. Just postulating the concept’s existence, or using a language-based predicate to represent it, fails to accomplish anything. There is nothing useful in claiming that TRUE (X) and FALSE (X) represent the abstract concepts of truth and falsity. Obviously, specifying the content that distinguishes them is essential. Once this has been accomplished, my claim is simply that simulations of this content are sufficient to represent it – there is no need to transduce it into amodal symbols.

Another key issue concerns what types of content are potentially relevant to representing abstract concepts. Throughout the target article, but especially in section 3.4.2, I argued that it is impossible to represent abstract concepts using content acquired solely through sensation of the physical world. On the contrary, it is essential to include content from introspection. Whereas concrete concepts are typically concerned only with things in the world, abstract concepts are about internal events, as well. Thus, I agree that abstract concepts cannot be explained solely by perception of the external world, and my frequent argument has been that this additional content arises in introspection. Unlike Landauer and Ohlsson, I have specified where this content is supposed to originate. I further predict that should they attempt to characterize this content themselves, they will ultimately find themselves appealing to introspective events.
Most important, I predict that simulations of these events will be sufficient to represent them and that amodal symbols will not be necessary.

R3.2. Abstract concepts of nonexistent entities.

Several commentators wondered how perceptual symbol systems could represent something that a person has never experienced. If, as I just suggested, the representation of an abstract concept is the reenactment of its content, how can we ever represent an abstract concept whose referents have never been encountered? How do we represent the end of time, life after death, infinite set, and time travel (Ohlsson)? How do we represent electromagnetic field and electron (Toomela)?

It is instructive to consider concrete concepts that we have never experienced, such as purple waterfall, cotton cat, and buttered chair. Although I have never experienced referents of these concepts, I have no trouble understanding them. Using the productive mechanisms of perceptual symbol systems, I can combine components of past experience to construct novel simulations of these entities (sect. 3.1.2). I can simulate a normal waterfall and transform its color to purple; I can simulate a normal cat and transform its substance to cotton; I can simulate a normal chair and simulate buttering it.

If I can construct such simulations for concrete concepts, why can I not construct them for abstract concepts? For the end of time, I can begin by simulating the ends of known processes, such as the end of growth and the end of walking. In all such cases, a process occurring over a stretch of time stops. Applying the same schematic simulation to time yields an interpretation of the end of time, even though we have not experienced it. Simple-mindedly, we might imagine that all clocks stop running. With a little more sophistication, we might imagine all change in the world ending, with everything freezing in place. Or, we might simply imagine all matter disappearing into nothing.
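The productive mechanism at work in these examples can be caricatured in a few lines of code. This is a deliberately simplified sketch under strong assumptions: feature dictionaries stand in for perceptual simulations, and the feature names are my own invention. The point is only the shape of the operation – retrieve a stored simulation of a familiar entity, then transform one of its components.

```python
def simulate(entity):
    """Toy 'simulator': return a default simulation as a feature dictionary."""
    defaults = {
        "waterfall": {"substance": "water", "color": "white", "motion": "falling"},
        "cat":       {"substance": "flesh", "color": "grey",  "motion": "walking"},
    }
    return dict(defaults[entity])  # copy, so transformations never alter the stored default

def transform(simulation, **changes):
    """Productively edit selected components of a simulation."""
    return {**simulation, **changes}

# "Purple waterfall": simulate a normal waterfall, transform its color.
purple_waterfall = transform(simulate("waterfall"), color="purple")

# "Cotton cat": simulate a normal cat, transform its substance.
cotton_cat = transform(simulate("cat"), substance="cotton")
```

All untouched components (the waterfall's falling motion, the cat's shape of features) carry over from past experience, which is what makes the novel combination interpretable.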
By combining past experience productively, we can simulate events and entities that we have never experienced, regardless of whether they are concrete or abstract (sect. 3.1.2). If space permitted, I would be happy to provide similar accounts of the other examples raised. However, making even a half-hearted attempt to follow my proposed strategy for representing abstract concepts may well be sufficient for readers to convince themselves that it is quite possible to represent nonexperienced concepts with productive simulations (sect. 3.4.2).

What is perhaps most troubling about raising nonexperienced concepts as a problem for perceptual symbol systems is the lack of awareness that these concepts constitute a challenge for amodal symbol systems, too. How do amodal approaches represent the content of these concepts? Once amodal theorists make the effort to identify this content, I again predict that perceptual symbols will provide a direct and parsimonious account.

R3.3. Metaphor and abstract concepts.

In reaction to my argument that metaphor does not form the primary basis of abstract concepts (sect. 3.4.1), Gibbs & Berg respond that metaphor does not just elaborate and extend abstract concepts – it is central to their core. In making their case, they cite empirical studies showing that metaphors structure abstract concepts and produce entailments in their representations. My intention was never to underestimate the importance of metaphor in abstract concepts, especially those for which people have no direct experience. I agree that metaphor is important. Instead, my intention was to redress cognitive linguists’ failure to consider the importance of direct experience in abstract concepts, and I remain convinced that direct experience is critically important. It remains an open empirical question just how much metaphor structures abstract concepts, and I suspect that the nature of this influence varies widely from concept to concept.
Of the greatest importance, perhaps, is establishing a detailed process model of how direct experience and metaphor interact to represent abstract concepts over the course of their acquisition.

R3.4. Situations and abstract concepts.

Central to my proposed strategy for representing abstract concepts is identifying the background event sequences that frame them (sect. 3.4). Perhaps the greatest omission in the target article was my failure to cite empirical research on situations that supports this conjecture. Much research by Schwanenflugel and her colleagues shows that abstract concepts are harder to process than concrete concepts when background situations are absent, but that both types of concepts are equally easy to process when background situations are present (e.g., Schwanenflugel 1991; Schwanenflugel et al. 1988; Schwanenflugel & Shoben 1983; Wattenmaker & Shoben 1987). Wiemer-Hastings (1998) and Wiemer-Hastings and Graesser (1998) similarly report that sentential contexts are much more predictive of abstract words than of concrete words, further indicating the importance of background situations for abstract concepts. In their commentary here, Wiemer-Hastings & Graesser offer excellent examples of how abstract concepts depend on situations for meaning. Understanding why abstract concepts depend so heavily on background situations is likely to be central for developing adequate accounts of them.

R3.5. Representing truth, falsity, and negation.

As section 3.4.3 noted, it was never my aim to provide complete accounts of these three concepts. Instead, I presented partial accounts to illustrate a strategy for representing abstract concepts in general (sect. 3.4.2), noting that more complete analyses would be required later. Landau complains that I did not really specify anything about the content of truth. This is indeed quite puzzling, given that the commentators I am about to discuss all believed that I had in fact specified such content.
The problem, they thought, was that I had not specified it adequately. Two commentaries suggested that my account of truth cannot be distinguished from match (Adams & Campbell), or from similar, comparable, and looks like (Mitchell & Clement). Following the proposed strategy for representing abstract concepts in section 3.4.2, distinguishing these concepts begins by examining the situations in which they occur. In match, similar, comparable, and looks like, the relevant situation typically involves comparing two entities or events in the same domain of experience. Thus, one might compare two birds in the world or two imagined plans for a vacation. In none of these cases are two ontologically different things typically compared, such as an imagined perception and an actual one. Furthermore, subtle aspects of comparison distinguish these four concepts from one another. Thus, match implies that identical, or nearly identical, features exist between the compared objects, whereas similar, comparable, and looks like imply partially overlapping features. Within the latter three concepts, similar allows just about any partial match, comparable implies partial matches on aligned dimensions (Gentner & Markman 1997), and looks like implies partial matches in vision.

In contrast, the sense of truth in section 3.4.3 and Figure 7a specifies that an agent attempts to map a mental simulation about a perceived situation into the perceived situation, with the simulation being true if it provides an adequate account of the perceived situation. Clearly, the components of this account differ from those just described for match, similar, comparable, and looks like. In truth, one entity (a mental simulation) is purported to be about a second entity (a perceived situation).
In none of the other four concepts do the two entities being compared reside in different modalities, nor is one purported to be about the other. As we shall see, more content must be added to this account of truth to make it adequate. Nevertheless, enough already exists to distinguish it from these other concepts. Once one establishes such content, the key issue, of course, is deciding how to represent it. Following the amodal view, we could attempt to formulate complicated predicate calculus or programming expressions to describe it. Alternatively, we could attempt to identify the direct experience of this content in the relevant situations and argue that simulations of these experiences represent the content. Thus, over many occasions of assessing truth, a simulator develops that can construct specific simulations of what it is like to carry out this procedure.

Siebel notes an omission in my account of truth in Figure 7a. How does it explain errors in mapping simulations to perceived situations? If I simulate a donkey and mistakenly bind it to a perceived horse, do I believe it true that the perceived entity is a donkey? If I have no disconfirming evidence, yes! Of course, this does not mean that my simulation is actually true; it simply means that I have concluded incorrectly that it is. Indeed, people often believe that an expectation is true, even before they compare it to the perceived world (Gilbert 1991). Where the account of truth in Figure 7a must be extended is to account for the discovery of disconfirming evidence. Following the proposed strategy for representing abstract concepts (sect. 3.4.2), simulations of disconfirmation must be added. Because many kinds of disconfirmation are possible, many event sequences are necessary. For example, further perception of the horse might activate additional visual features, which in turn activate the simulator for horse.
After comparing simulations for horse and donkey to the perceived entity, I might decide that the original donkey simulation does not fit as well as the subsequent horse simulation, therefore changing what I believe to be true. Alternatively, another agent with more expertise than me might claim that the perceived entity is a horse, so that I attribute falsity to my original simulation and attribute truth to a new simulation from my horse simulator. By incorporating schematic simulations of disconfirmation into the simulator for truth, the account in Figure 7a can be extended to handle disconfirmation. It is simply a matter of implementing the strategy in section 3.4.2 to discover the relevant content.

Ohlsson notes another omission in my account of truth. Rather than applying my proposed strategy to see if it yields the necessary content, however, he simply concludes that it cannot. The omission he notes is that lack of a fit does not necessarily mean a simulation is false. In his example, I see a cat on a mat in my office. While I am turned away, someone pulls the mat outside my office with the cat on it. When I turn around, the cat and the mat are gone. Ohlsson claims that my account of falsity in Figure 7b implies that it is false that the cat is on the mat. Actually, though, this is not what my account would say. As I stated repeatedly in the target article, a simulation is purported to be about a perceived situation (sect. 3.4.3). Thus, in Ohlsson’s example, my simulation of a cat on a mat is purported to be about my office. When this simulation no longer matches the perceived situation, one reasonable conclusion is that it is false that there is a cat on a mat in my office, not Ohlsson’s conclusion that it is false that the cat is on the mat. If the critical question were whether the cat is on the mat anywhere, then I would have to compare a simulation of a cat on a mat to many perceived situations until I found one containing the mat.
This adds another important layer of content to my accounts of truth and falsity in Figure 7: On some occasions in which these concepts are used, it may be necessary to compare a simulation to more than one perceived situation. Only after failing to find a satisfactory fit in any relevant situation does a conclusion about falsity follow, assuming that a match in a single situation suffices for truth (i.e., in a universally quantified claim, a match would be necessary in every situation). The same solution handles Ohlsson’s other example about aliens. We do not conclude that there are no aliens in the universe after examining only one situation. Instead, we have to examine many possible situations before reaching this conclusion. Again, however, applying the strategy in section 3.4.2 yields the necessary content. By examining the appropriate situations, the relevant content can be discovered and added to the respective simulators.

Fauconnier notes that a near miss is a better example of falsity than a far miss. On perceiving a balloon above a cloud, for example, a simulation of a balloon under a cloud is a better example of a false simulation than is a simulation of a typewriter on a desk. Such a prediction is borne out by an increasingly large literature which shows that high similarity counterintuitively produces high dissimilarity (Gentner & Markman 1997). The more alignable two representations are, the more similar they are, and the more different they are. Thus, I suspect that Fauconnier’s prediction has more to do with how people compute fits between representations than with my accounts of truth and falsity per se. His prediction applies not only to these concepts but also to all of the related concepts discussed earlier, such as match, similar, comparable, and looks like.

Finally, Wiemer-Hastings & Graesser explore my point in section 3.4.3 that truth is a polysemous concept.
In their commentary, they explore a variety of senses that I did not attempt to cover in Figure 7a. Besides highlighting the fact that most abstract concepts are highly polysemous, they raise the further issue that any theory of knowledge must differentiate the senses of an abstract concept. Using the concept of idea, Wiemer-Hastings & Graesser illustrate how situations may well provide leverage on this task. I suspect that the same strategy is also likely to provide leverage on the different senses of truth that they raise, such as the truth of a single utterance, the truth of all a person’s utterances, and scientists trying to discover the truth. As just described for falsity, differences in how simulations are assessed against perceived situations may distinguish the first two senses. In the first sense, a single simulation for one particular utterance must be compared to a single perceived situation. In the second sense, a simulation for each claim a person makes must be compared to each respective situation. To account for the scientific sense of truth, a completely new set of situations from the scientific enterprise is required, including the formulation of hypotheses, the conducting of research, the assessment of the hypotheses against data, and so forth. Only within this particular set of experiences does the scientific sense of truth make sense. Again, specifying the meaning of an abstract concept requires searching for the critical content in the relevant background situations.

R3.6. How do you represent X?

Various commentators wondered how perceptual symbols could represent all sorts of other abstract concepts. These commentaries fall into two groups. First, there was the commentator who followed my strategy in section 3.4.2 for representing abstract concepts and discovered that it does indeed provide leverage on representing them.
Second, there were the commentators who complained that perceptual symbol systems fail to represent particular abstract concepts, yet who apparently did not try this strategy. At least these commentators do not report that this strategy failed on being tried.

First, consider the commentator who tried the strategy. Hurford notes that the target article failed to provide an account of the concept individual. On examining relevant situations that contain familiar individuals, Hurford induced an account of this concept: A unique individual exists whenever we fail to perceive anything exactly like it simultaneously in a given situation. Thus, we conclude that our mother is a unique individual because we never perceive anyone exactly like her simultaneously. For the same reason, we conclude that the sun and moon are unique individuals. As Hurford further notes, mistakes about the multiple identities of the same individual follow naturally from this account. Because we perceive Clark Kent and Superman as each being unique, we never realize that they are the same individual (the same is true of the Morning Star and the Evening Star). Only after viewing the transition of one into the other do we come to believe that they are the same individual. Like my account of truth, Hurford’s account of individual may need further development. Regardless, it illustrates the potential of the strategy in section 3.4.2 for discovering and representing the content of abstract concepts.

Finally, consider the commentators who doubted that perceptual symbol systems could represent particular abstract concepts, apparently without trying the strategy in section 3.4.2.
Adams & Campbell expressed skepticism about chiliagons and myriagons; Brewer expressed skepticism about entropy, democracy, evolution, and because; Mitchell & Clement expressed skepticism about can, might, electricity, ignorant, and thing; Ohlsson expressed skepticism about health care system, middle class, recent election, and internet. Again, if there were no shortage of space, I would be happy to illustrate how the proposed strategy in section 3.4.2 might represent these concepts. Again, the content of an abstract concept must be specified, regardless of whether it is to be represented perceptually or amodally, and the proposed strategy is a useful means of identifying it. Furthermore, I remain convinced that simulations of this content are sufficient to represent it, and that amodal redescriptions are unnecessary.
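The multi-situation assessment of truth and falsity discussed in R3.5 amounts to a quantificational schema, which can be sketched in a few lines. The Python below is a hedged illustration, not a committed model: `fits` stands in for the unspecified perceptual comparison process, and the feature dictionaries are invented stand-ins for simulations and perceived situations. Truth of an existential claim requires a fit in at least one relevant situation; a conclusion about falsity follows only after no relevant situation fits.

```python
def fits(simulation, situation):
    """Toy comparison: the simulation fits if all of its features appear in the situation."""
    return all(situation.get(k) == v for k, v in simulation.items())

def true_somewhere(simulation, situations):
    """Existential reading: a match in a single relevant situation suffices for truth."""
    return any(fits(simulation, s) for s in situations)

def false_everywhere(simulation, situations):
    """Falsity follows only after failing to find a fit in any relevant situation."""
    return not true_somewhere(simulation, situations)

# Ohlsson's example: "Is there a cat on a mat anywhere?" requires comparing the
# simulation to many perceived situations, not just the office.
cat_on_mat = {"animal": "cat", "surface": "mat"}
office = {"animal": None, "surface": None}      # the cat and mat were pulled outside
hallway = {"animal": "cat", "surface": "mat"}
```

For a universally quantified claim, `true_somewhere` would simply be replaced by a check that the simulation fits every relevant situation.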
